Malliavin calculus

The Malliavin calculus, named after Paul Malliavin, extends the calculus of variations from functions to stochastic processes. The Malliavin calculus is also called the stochastic calculus of variations. In particular, it allows the computation of derivatives of random variables.

Malliavin invented his calculus to provide a stochastic proof that Hörmander's condition implies the existence of a density for the solution of a stochastic differential equation; Hörmander's original proof was based on the theory of partial differential equations. His calculus enabled Malliavin to prove regularity bounds for the solution's density. The calculus has been applied to stochastic partial differential equations.

The calculus allows integration by parts with random variables; this operation is used in mathematical finance to compute the sensitivities of financial derivatives. The calculus has applications for example in stochastic filtering.

Invariance principle

The usual invariance principle for Lebesgue integration over the whole real line is that, for any real number h and integrable function f, the following holds

 \int_{-\infty}^\infty f(x)\, d \lambda(x) = \int_{-\infty}^\infty f(x+h)\, d \lambda(x) .

This can be used to derive the integration by parts formula: differentiating both sides with respect to h shows that \int_{-\infty}^\infty f' \,d \lambda = 0 whenever f and f' are integrable, so setting f = gh gives

 0 = \int_{-\infty}^\infty (gh)' \,d \lambda = \int_{-\infty}^\infty g h'\, d \lambda +
\int_{-\infty}^\infty g' h\, d \lambda,

that is, \int_{-\infty}^\infty g h'\, d \lambda = -\int_{-\infty}^\infty g' h\, d \lambda.
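As a numerical sanity check, the identity \int g h'\, d \lambda = -\int g' h\, d \lambda can be verified directly for concrete decaying functions. The sketch below (the functions g and h are arbitrary illustrative choices, not taken from the text) uses a simple Riemann sum in NumPy.

```python
import numpy as np

# Check the classical integration-by-parts identity
#     int g h' d(lambda) = - int g' h d(lambda)
# for two arbitrary, rapidly decaying smooth functions
# (illustrative choices, not prescribed by the article).
x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]

g = np.exp(-x**2)                                        # g(x)  = exp(-x^2)
gp = -2.0 * x * np.exp(-x**2)                            # g'(x)
h = np.sin(x) * np.exp(-x**2)                            # h(x)  = sin(x) exp(-x^2)
hp = (np.cos(x) - 2.0 * x * np.sin(x)) * np.exp(-x**2)   # h'(x)

# Simple Riemann sums; both integrands are negligible beyond the cutoff +-10.
lhs = np.sum(g * hp) * dx        #  int g h' d(lambda)
rhs = -np.sum(gp * h) * dx       # -int g' h d(lambda)
print(lhs, rhs)
assert abs(lhs - rhs) < 1e-8
```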

A similar idea can be applied in stochastic analysis for the differentiation along a Cameron-Martin-Girsanov direction. Indeed, let h_s be a square-integrable predictable process and set

 \varphi(t) = \int_0^t h_s\, d s .

If X is a Wiener process, the Girsanov theorem then yields the following analogue of the invariance principle:

 E(F(X + \varepsilon\varphi))= E \left [F(X) \exp \left ( \varepsilon\int_0^1 h_s\, d X_s -
\frac{1}{2}\varepsilon^2 \int_0^1 h_s^2\, ds \right ) \right ].
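This analogue of the invariance principle can be illustrated by simulation in the simplest case. The sketch below makes the illustrative choices h ≡ 1, \varphi(t) = t and F(X) = X_1^2 (none of which are prescribed above); then only the terminal value X_1 ~ N(0,1) matters, \int_0^1 h_s\, dX_s = X_1, and both sides equal 1 + \varepsilon^2.

```python
import numpy as np

# Monte Carlo check of the Girsanov identity with the illustrative choices
# h = 1, phi(t) = t, F(X) = X_1^2 (hypothetical examples, not from the text).
# Only X_1 ~ N(0,1) enters: int_0^1 h dX = X_1, and both sides equal 1 + eps^2.
rng = np.random.default_rng(0)
eps = 0.5
z = rng.standard_normal(400_000)                       # samples of X_1

lhs = np.mean((z + eps) ** 2)                          # E[F(X + eps*phi)]
rhs = np.mean(z**2 * np.exp(eps * z - 0.5 * eps**2))   # Girsanov-weighted side
print(lhs, rhs)                                        # both close to 1 + eps^2 = 1.25
assert abs(lhs - (1 + eps**2)) < 0.02
assert abs(rhs - (1 + eps**2)) < 0.05
```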

Differentiating with respect to ε on both sides and evaluating at ε=0, one obtains the following integration by parts formula:

E(\langle DF(X), \varphi\rangle) = E\Bigl[ F(X) \int_0^1 h_s\, dX_s\Bigr].

Here, the left-hand side is the Malliavin derivative of the random variable F in the direction \varphi and the integral appearing on the right hand side should be interpreted as an Itô integral. This expression remains true (by definition) also if h is not adapted, provided that the right hand side is interpreted as a Skorokhod integral.
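The integration by parts formula can likewise be checked by Monte Carlo. The sketch below uses the illustrative choices h ≡ 1 (so \varphi(t) = t) and F(X) = X_1^3, which are not prescribed by the text: the directional derivative is \langle DF(X), \varphi\rangle = 3 X_1^2 and \int_0^1 h_s\, dX_s = X_1, so both sides equal E(X_1^4) = 3.

```python
import numpy as np

# Monte Carlo check of  E(<DF(X), phi>) = E[F(X) int_0^1 h dX]  with the
# illustrative choices h = 1, phi(t) = t, F(X) = X_1^3 (not from the text).
# Then <DF(X), phi> = 3 X_1^2 and int_0^1 h dX = X_1, so both sides equal 3.
rng = np.random.default_rng(1)
z = rng.standard_normal(500_000)   # samples of X_1 ~ N(0,1)

lhs = np.mean(3.0 * z**2)          # E<DF(X), phi> = E[3 X_1^2] = 3
rhs = np.mean(z**3 * z)            # E[F(X) X_1]   = E[X_1^4]   = 3
print(lhs, rhs)
assert abs(lhs - 3.0) < 0.05
assert abs(rhs - 3.0) < 0.15
```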

Clark-Ocone formula

One of the most useful results from Malliavin calculus is the Clark-Ocone theorem, which allows the process in the martingale representation theorem to be identified explicitly. A simplified version of this theorem is as follows:

For a Lipschitz functional F: C[0,1] \to \R with  E(F(X)^2) < \infty that has a strong derivative kernel, in the sense that for \varphi in C[0,1]

 \lim_{\varepsilon \to 0} (1/\varepsilon)(F(X+\varepsilon \varphi) - F(X) ) = \int_0^1 F'(X,dt) \varphi(t)\ \mathrm{a.e.}\ X

then

F(X) = E(F(X)) + \int_0^1 H_t \,d X_t ,

where H is the previsible projection of F'(X, (t,1]), which may be viewed as the derivative of the function F with respect to a suitable parallel shift of the process X over the portion (t,1] of its domain.

This may be more concisely expressed by

F(X) = E(F(X))+\int_0^1 E (D_t F | \mathcal{F}_t ) \, d X_t .

Much of the work in the formal development of the Malliavin calculus involves extending this result to the largest possible class of functionals F by replacing the derivative kernel used above by the "Malliavin derivative" denoted D_t in the above statement of the result.
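The simplified statement can be illustrated by simulation for the hypothetical functional F(X) = X_1^2 (an illustrative choice, not from the text): here D_t F = 2 X_1, the previsible projection is E(D_t F | \mathcal{F}_t) = 2 X_t, and the formula reads X_1^2 = 1 + \int_0^1 2 X_t\, dX_t, which an Euler discretisation reproduces path by path.

```python
import numpy as np

# Clark-Ocone formula for the illustrative functional F(X) = X_1^2:
#   X_1^2 = E(X_1^2) + int_0^1 2 X_t dX_t = 1 + int_0^1 2 X_t dX_t.
# Simulate Wiener paths and check that the Ito sum closes the identity.
rng = np.random.default_rng(2)
n_paths, n_steps = 2000, 1000
dt = 1.0 / n_steps

dX = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))  # Wiener increments
X = np.cumsum(dX, axis=1)                                   # X at t_1, ..., t_n = 1
X_left = np.hstack([np.zeros((n_paths, 1)), X[:, :-1]])     # X at left endpoints
ito = np.sum(2.0 * X_left * dX, axis=1)                     # int_0^1 2 X_t dX_t

# Pathwise residual; it vanishes as n_steps -> infinity.
err = X[:, -1] ** 2 - (1.0 + ito)
print(np.mean(np.abs(err)))
assert np.mean(np.abs(err)) < 0.1
```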

Skorokhod integral

The Skorokhod integral operator, conventionally denoted δ, is defined as the adjoint of the Malliavin derivative: for u in the domain of the operator, which is a subset of L^2([0,\infty) \times \Omega), and for F in the domain of the Malliavin derivative, we require

 E (\langle DF, u \rangle ) = E (F \delta (u) ),

where the inner product is that on L^2[0,\infty), namely

 \langle f, g \rangle = \int_0^\infty f(s) g(s) \, ds.

The existence of this adjoint follows from the Riesz representation theorem for linear operators on Hilbert spaces.

It can be shown that if u is adapted then

 \delta(u) = \int_0^\infty u_t\, d W_t ,

where the integral is to be understood in the Itô sense. This provides a method of extending the Itô integral to non-adapted integrands.
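The duality defining δ can be checked by Monte Carlo for an adapted integrand, where δ(u) reduces to the Itô integral. With the illustrative choices u_t = X_t and F(X) = X_1^2 (not prescribed by the text), one has D_t F = 2 X_1, so E(\langle DF, u\rangle) = E(2 X_1 \int_0^1 X_t\, dt) = 1 and E(F \delta(u)) = E(X_1^2 \int_0^1 X_t\, dX_t) = 1.

```python
import numpy as np

# Monte Carlo check of the duality E(<DF, u>) = E(F delta(u)) for the adapted
# process u_t = X_t and F(X) = X_1^2 (illustrative choices, not from the text).
# Since u is adapted, delta(u) is the Ito integral int_0^1 X_t dX_t, and
# D_t F = 2 X_1 gives <DF, u> = int_0^1 2 X_1 X_t dt. Both sides equal 1.
rng = np.random.default_rng(3)
n_paths, n_steps = 20_000, 200
dt = 1.0 / n_steps

dX = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))  # Wiener increments
X = np.cumsum(dX, axis=1)
X_left = np.hstack([np.zeros((n_paths, 1)), X[:, :-1]])     # X at left endpoints

delta_u = np.sum(X_left * dX, axis=1)                   # Ito integral int X dX
X1 = X[:, -1]
lhs = np.mean(2.0 * X1 * np.sum(X_left, axis=1) * dt)   # E int_0^1 2 X_1 X_t dt
rhs = np.mean(X1**2 * delta_u)                          # E[F delta(u)]
print(lhs, rhs)
assert abs(lhs - 1.0) < 0.1
assert abs(rhs - 1.0) < 0.2
```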
